Government & the Courts
AI App that tells defendant what to say in court used for first time - Talker
A smartphone app that uses artificial intelligence to tell a defendant what to say in court has been used for the first time - and it is a lot cheaper than a lawyer. It is the first time artificial intelligence (AI) has been used in a trial anywhere in the world. The neural network will listen to all speeches from witnesses, lawyers and the judge, and the defendant will be told exactly what to say via an earpiece, sticking only to those words. Legal history is being made over a speeding fine.
- Law > Government & the Courts (0.32)
- Government > Regional Government (0.32)
- Government > Immigration & Customs (0.32)
Robot lawyer takes its first case: Hearing next month will see the defendant get advice from AI
A court hearing in February is set to make history when the defendant is advised by artificial intelligence. The technology comes from DoNotPay, a company founded in 2015 by a then-Stanford University freshman and initially developed to appeal parking tickets. The world's first robot lawyer will run on the defendant's smartphone and listen to the proceedings to give its client instructions on what to say in argument. The courthouse location, charges and name of the defendant have not been revealed, according to New Scientist. Joshua Browder initially created the robot to appeal parking tickets in the UK, and has since expanded it to the US.
- Europe > United Kingdom (0.25)
- Asia > China (0.21)
- North America > United States (0.16)
The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI
There is mounting public concern over the influence that AI-based systems have in our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From Indigenous people addressing the lack of reliable data, to smart-city stakeholders, to students protesting their universities' relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. There are biased, wrongful, and disturbing assumptions embedded in AI algorithms that could become locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of AI's greatest contributions will be to make us ultimately understand how important human wisdom truly is to life on Earth.
- Research Report > New Finding (1.00)
- Press Release (1.00)
- Overview (1.00)
- (2 more...)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Telecommunications (1.00)
- (60 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
Topic Modeling with Wasserstein Autoencoders
Feng Nan, Ran Ding, Ramesh Nallapati, Bing Xiang
We propose a novel neural topic model in the Wasserstein autoencoder (WAE) framework. Unlike existing variational-autoencoder-based models, we directly enforce a Dirichlet prior on the latent document-topic vectors. We exploit the structure of the latent space and apply a suitable kernel in minimizing the Maximum Mean Discrepancy (MMD) to perform distribution matching. We find that MMD performs much better than a Generative Adversarial Network (GAN) at matching high-dimensional Dirichlet distributions. We further find that incorporating randomness in the encoder output during training leads to significantly more coherent topics. To measure the diversity of the produced topics, we propose a simple topic uniqueness metric. Together with the widely used coherence measure NPMI, this offers a more holistic evaluation of topic quality. Experiments on several real datasets show that our model produces significantly better topics than existing topic models.
- North America > United States > California (0.67)
- Asia > Middle East > Iraq (0.67)
- Europe > Germany (0.67)
- (46 more...)
- Transportation > Ground (1.00)
- Media > Television (1.00)
- Media > Music (1.00)
- (32 more...)
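The MMD matching step the WAE abstract above describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a plain RBF kernel and synthetic Dirichlet draws stand in for the paper's latent-space-suited kernel and real encoder outputs, and all sizes and parameters here are made up for the demo.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    # Biased (V-statistic) estimate of squared Maximum Mean Discrepancy:
    # the squared distance between the two empirical mean embeddings in
    # the kernel's RKHS. Zero iff the empirical embeddings coincide.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
# Stand-ins: document-topic vectors from an encoder vs. draws from the
# Dirichlet prior the model wants to match (5 topics, 200 documents).
prior = rng.dirichlet(np.full(5, 0.1), size=200)     # sparse Dirichlet prior
encoded = rng.dirichlet(np.full(5, 5.0), size=200)   # poorly matched encoder output
matched = rng.dirichlet(np.full(5, 0.1), size=200)   # well-matched output

loss_bad = mmd2(encoded, prior)    # large: distributions differ
loss_good = mmd2(matched, prior)   # near zero: distributions agree
```

Training would minimize the MMD term (alongside a reconstruction loss) with respect to the encoder parameters; the concentration values, sample sizes, and `gamma` above are arbitrary choices for the sketch.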
Two-stage Algorithm for Fairness-aware Machine Learning
Junpei Komiyama, Hajime Shimao
Algorithmic decision-making processes now affect many aspects of our lives. Standard machine learning tools, such as classification and regression, are subject to bias in the data, so directly applying such off-the-shelf tools can lead to a specific group being unfairly discriminated against. Removing sensitive attributes from the data does not solve this problem, because a \textit{disparate impact} can arise when non-sensitive and sensitive attributes are correlated. Here, we study a fair machine learning algorithm that avoids such a disparate impact when making a decision. Inspired by the two-stage least squares method widely used in economics, we propose a two-stage algorithm that removes bias from the training data. The proposed algorithm is conceptually simple. Unlike most existing fair algorithms, which are designed for classification tasks, the proposed method can (i) handle regression tasks, (ii) combine explanatory attributes to remove reverse discrimination, and (iii) handle numerical sensitive attributes. The performance and fairness of the proposed algorithm are evaluated in simulations with synthetic and real-world datasets.
- Oceania > Australia (0.28)
- Europe > Spain (0.14)
- North America > United States > Florida > Broward County (0.14)
- (2 more...)
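The two-stage idea in the abstract above can be sketched with numpy, under the assumption (mine, for illustration) that both stages are ordinary least squares: stage 1 regresses each non-sensitive feature on the sensitive attributes and keeps the residuals; stage 2 fits the predictor on those residualized features.

```python
import numpy as np

def residualize(X, S):
    # Stage 1: regress every non-sensitive column of X on the sensitive
    # attributes S (plus intercept) and keep only the residuals -- the
    # part of X not linearly explained by S.
    S1 = np.column_stack([np.ones(len(S)), S])
    beta, *_ = np.linalg.lstsq(S1, X, rcond=None)
    return X - S1 @ beta

def two_stage_fit(X, S, y):
    # Stage 2: ordinary least squares on the residualized features.
    Xr = residualize(X, S)
    X1 = np.column_stack([np.ones(len(Xr)), Xr])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w, Xr

rng = np.random.default_rng(1)
s = rng.normal(size=(500, 1))             # numerical sensitive attribute
x = 2.0 * s + rng.normal(size=(500, 1))   # feature correlated with s
y = x[:, 0] + rng.normal(size=500)        # target

w, xr = two_stage_fit(x, s, y)
# After stage 1 the feature is orthogonal to s, so the stage-2 model
# cannot transmit a disparate impact through that correlation.
```

Because OLS residuals are exactly orthogonal to the regressors, the residualized feature has (sample) correlation zero with the sensitive attribute, which is the mechanism by which the disparate impact channel is closed in this sketch.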
Artificial intelligence prevails at predicting Supreme Court decisions
Artificial intelligence can predict Supreme Court decisions better than some experts. Decision outcomes included whether the court reversed a lower court's decision and how each justice voted. For each year, the model looked at the features of every case and predicted the decision outcomes. "Every time we've kept score, it hasn't been a terribly pretty picture for humans," says the study's lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago.
- Law > Government & the Courts (1.00)
- Law Enforcement & Public Safety (1.00)
- Government > Regional Government > North America Government (1.00)
- (2 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Europe (1.00)
- North America > United States > Massachusetts (0.46)
- Education (1.00)
- (3 more...)
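The year-by-year setup the article describes (train on past terms, predict the next) can be sketched as a walk-forward evaluation loop. Everything here is illustrative: the article does not specify the study's model, so a trivial majority-class predictor and synthetic data stand in.

```python
import numpy as np

def walk_forward_accuracy(years, X, y, fit, predict):
    # For each term, train only on earlier terms and score predictions on
    # that term's cases -- strictly out-of-sample, as described above.
    correct = total = 0
    for t in np.unique(years)[1:]:        # skip the first term (no history)
        train, test = years < t, years == t
        model = fit(X[train], y[train])
        correct += int((predict(model, X[test]) == y[test]).sum())
        total += int(test.sum())
    return correct / total

# Toy stand-ins for the study's model: predict the majority outcome seen so far.
fit = lambda X, y: int(round(y.mean()))
predict = lambda m, X: np.full(len(X), m)

rng = np.random.default_rng(2)
years = np.repeat(np.arange(2000, 2005), 40)   # 5 synthetic terms, 40 cases each
X = rng.normal(size=(200, 3))                  # placeholder case features
y = (rng.random(200) < 0.7).astype(int)        # outcomes skewed toward "reverse"

acc = walk_forward_accuracy(years, X, y, fit, predict)
```

The accuracy here only reflects the class skew of the toy labels; the point of the sketch is the evaluation protocol, which is also why such studies compare against informed experts rather than against chance.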